Combining Multi-modal Features for Social Media Analysis

Authors

  • Spiros Nikolopoulos
  • Eirini Giannakidou
  • Yiannis Kompatsiaris
  • Ioannis Patras
  • Athena Vakali
Abstract

In this chapter we discuss methods for efficiently modeling the diverse information carried by social media. The problem is viewed as a multi-modal analysis process in which specialized techniques are used to overcome the obstacles arising from the heterogeneity of the data. Focusing on the optimal combination of low-level features (i.e., early fusion), we present a bio-inspired algorithm for feature selection that weights the features according to their appropriateness for representing a resource. With the same objective of optimal feature combination, we also examine the use of pLSA-based aspect models as a means of defining a latent semantic space in which heterogeneous types of information can be effectively combined. Tagged images taken from social sites are used in the characteristic scenarios of image clustering and retrieval to demonstrate the benefits of multi-modal analysis in social media.

Affiliations

  • Spiros Nikolopoulos: Informatics & Telematics Institute, Thermi, Thessaloniki, Greece and School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, UK, e-mail: [email protected]
  • Eirini Giannakidou: Informatics & Telematics Institute, Thermi, Thessaloniki, Greece and Department of Computer Science, Aristotle University of Thessaloniki, Greece, e-mail: [email protected]
  • Ioannis Kompatsiaris: Informatics & Telematics Institute, Thermi, Thessaloniki, Greece, e-mail: [email protected]
  • Ioannis Patras: School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, UK, Tel. +44 20 7882 7523, Fax: +44 20 7882 7997, e-mail: [email protected]
  • Athena Vakali: Department of Computer Science, Aristotle University of Thessaloniki, Greece, e-mail: [email protected]
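As a rough illustration of the pLSA-based idea mentioned above (not the chapter's actual implementation), the sketch below fits a pLSA aspect model with EM to a document-feature count matrix obtained by early fusion, i.e., by concatenating per-image visual-word and tag count histograms; the function name, the plain NumPy EM loop, and the placeholder inputs (visual_word_counts, tag_counts) are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def plsa(counts, n_topics, n_iter=50, seed=0, eps=1e-12):
    """Fit a pLSA aspect model P(w|d) = sum_z P(w|z) P(z|d) with EM.

    counts : (n_docs, n_features) non-negative matrix, e.g. the early-fused
             concatenation of visual-word and tag count histograms per image.
    Returns P(z|d) of shape (n_docs, n_topics) and P(w|z) of shape
    (n_topics, n_features).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_feats = counts.shape
    p_w_z = rng.random((n_topics, n_feats))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (n_docs, n_topics, n_feats).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + eps
        # M-step: re-estimate P(w|z) and P(z|d) from the expected counts.
        expected = counts[:, None, :] * joint
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps
    return p_z_d, p_w_z

# Hypothetical usage: visual_word_counts and tag_counts are per-image count
# histograms; images are then clustered in the shared latent topic space.
# fused = np.hstack([visual_word_counts, tag_counts])
# p_z_d, _ = plsa(fused, n_topics=20)
# labels = KMeans(n_clusters=10, n_init=10).fit_predict(p_z_d)
```

Clustering in the P(z|d) space rather than on the raw concatenated histograms is what allows the heterogeneous modalities to interact through shared latent aspects.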


Related articles

Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity

Part I, Social Media Content Analysis: Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity (Aixin Sun and Sourav S. Bhowmick); Tag-Based Social Image Search: Toward Relevant and Diverse Results (Kuiyuan Yang, Meng Wang, Xian-Sheng Hua, and Hong-Jiang Zhang); Social Image Tag Ranking by Two-View Learning ...


Social Event Detection Via Sparse Multi-modal Feature Selection and Incremental Density Based Clustering

Combining items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information now made available on the social web. This task is challenging due to the scale of the streams and the inherently multi-modal nature of the information to be contextualised. We present a methodology which...
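Since only this snippet of the paper is reproduced here, the following is a generic sketch of the kind of pipeline it describes rather than the paper's method: items are early-fused into one vector and grouped with an off-the-shelf density-based clusterer (scikit-learn's DBSCAN); the incremental and sparse feature-selection aspects are omitted, and all input names are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def cluster_event_batch(text_feats, visual_feats, timestamps,
                        eps=0.5, min_samples=5):
    """Group a batch of social media items into candidate events.

    text_feats, visual_feats : (n_items, d) arrays, e.g. TF-IDF vectors and
    image descriptors; timestamps : (n_items,) posting times in seconds.
    All inputs are hypothetical placeholders.
    """
    # Early-fuse the modalities into one vector per item.
    fused = np.hstack([
        normalize(text_feats),
        normalize(visual_feats),
        (timestamps - timestamps.min()).reshape(-1, 1) / 3600.0,  # hours
    ])
    # Density-based clustering; label -1 marks noise. A truly incremental
    # variant would update clusters as new items stream in instead of
    # refitting on every batch.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(fused)
```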


Hybridization of Facial Features and Use of Multi Modal Information for 3D Face Recognition

Despite achieving good performance in controlled environments, conventional 3D face recognition systems still encounter problems in handling large variations in lighting conditions, facial expression and head pose. Humans use a hybrid approach to recognize faces, and therefore the proposed method incorporates this human face recognition ability by combining global and local ...


Multi-modal Deep Learning Approach for Flood Detection

In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, and we use this information to detect floods. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network to extract the ...
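The snippet describes a CNN branch for the image and a bidirectional LSTM for the text/metadata; the toy PyTorch model below sketches that general fusion pattern, not the paper's architecture (the class name, layer sizes and the two-class head are assumptions).

```python
import torch
import torch.nn as nn

class FloodClassifier(nn.Module):
    """Toy CNN + bidirectional-LSTM fusion model (illustrative sizes only)."""

    def __init__(self, vocab_size, embed_dim=100, hidden=128):
        super().__init__()
        # Small CNN branch for the image (assumes a 3-channel input).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Bidirectional LSTM branch for the post's text/metadata tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden,
                              batch_first=True, bidirectional=True)
        # Fused representation -> flood / no-flood logits.
        self.head = nn.Linear(32 + 2 * hidden, 2)

    def forward(self, image, tokens):
        vis = self.cnn(image)                     # (B, 32)
        emb = self.embed(tokens)                  # (B, T, embed_dim)
        _, (h, _) = self.bilstm(emb)              # h: (2, B, hidden)
        txt = torch.cat([h[0], h[1]], dim=1)      # (B, 2 * hidden)
        return self.head(torch.cat([vis, txt], dim=1))
```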


Towards an intelligent framework for multimodal affective data analysis

An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. To cope with the growth of such multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information e...



Journal title:

Volume   Issue

Pages  -

Publication date: 2011